Results 1 - 4 of 4
1.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20244307

ABSTRACT

This paper proposes a deep learning-based approach to detect COVID-19 infections in lung tissues from chest Computed Tomography (CT) images. A two-stage classification model is designed to identify the infection from CT scans of COVID-19 and Community Acquired Pneumonia (CAP) patients. The proposed neural model, named Residual C-NiN, uses a modified convolutional neural network (CNN) with residual connections and a Network-in-Network (NiN) architecture for COVID-19 and CAP detection. The model is trained on the Signal Processing Grand Challenge (SPGC) 2021 COVID dataset. The proposed model achieves a slice-level classification accuracy of 93.54% on chest CT images and a patient-level classification accuracy of 86.59%, with class-wise sensitivities of 92.72%, 55.55%, and 95.83% for the COVID-19, CAP, and Normal classes, respectively. Experimental results show the benefit of adding NiN layers and residual connections to the proposed neural architecture. Experiments conducted on the dataset show significant improvement over existing state-of-the-art methods reported in the literature. © 2022 ACM.
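The two architectural ingredients named in the abstract, residual shortcuts and Network-in-Network (1×1 convolution) layers, can be sketched as follows. This is a minimal NumPy illustration of those two ideas only, not the authors' Residual C-NiN implementation; the function names, weight shapes, and activation choices are hypothetical.

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel linear map across channels,
    # the core building block of a Network-in-Network (NiN) layer.
    # x: (C_in, H, W), w: (C_out, C_in) -> returns (C_out, H, W)
    return np.tensordot(w, x, axes=([1], [0]))

def residual_nin_block(x, w1, w2):
    # Two stacked 1x1 convolutions with ReLU, plus an identity
    # shortcut: the input is added back before the final activation.
    h = np.maximum(conv1x1(x, w1), 0.0)
    h = conv1x1(h, w2)
    return np.maximum(h + x, 0.0)  # residual addition, then ReLU
```

With zero-initialized weights the block reduces to the identity path, which is the property that makes residual networks easy to optimize.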

2.
2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 ; : 10407-10420, 2022.
Article in English | Scopus | ID: covidwho-2266927

ABSTRACT

Proper noun compounds, e.g., "Covid vaccine", convey information in a succinct manner (a "Covid vaccine" is a "vaccine that immunizes against the Covid disease"). These are commonly used in short-form domains, such as news headlines, but are largely ignored in information-seeking applications. To address this limitation, we release a new manually annotated dataset, PRONCI, consisting of 22.5K proper noun compounds along with their free-form semantic interpretations. PRONCI is 60 times larger than prior noun compound datasets and also includes non-compositional examples, which have not been previously explored. We experiment with various neural models for automatically generating the semantic interpretations of proper noun compounds, ranging from few-shot prompting to supervised learning, with varying degrees of knowledge about the constituent nouns. We find that adding targeted knowledge, particularly about the common noun, results in performance gains of up to 2.8%. Finally, we integrate our model-generated interpretations with an existing Open IE system and observe a 7.5% increase in yield at a precision of 85%. The dataset and code are available at https://github.com/dair-iitd/pronci. © 2022 Association for Computational Linguistics.
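The few-shot prompting setup mentioned above can be sketched as a prompt builder that prepends demonstration pairs before the query compound. The prompt format and demonstration wording are illustrative assumptions, not the paper's actual prompts.

```python
def build_fewshot_prompt(examples, query):
    """Assemble a few-shot prompt for noun compound interpretation.

    examples: list of (compound, interpretation) demonstration pairs.
    query: the compound whose interpretation the model should generate.
    """
    parts = []
    for compound, interpretation in examples:
        parts.append(f"Compound: {compound}\nInterpretation: {interpretation}\n")
    # The prompt ends mid-pattern so the model completes the interpretation.
    parts.append(f"Compound: {query}\nInterpretation:")
    return "\n".join(parts)
```

The demonstration from the abstract itself ("Covid vaccine" → "vaccine that immunizes against the Covid disease") is a natural in-context example for such a prompt.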

3.
37th International Conference on Image and Vision Computing New Zealand, IVCNZ 2022 ; 13836 LNCS:119-130, 2023.
Article in English | Scopus | ID: covidwho-2249304

ABSTRACT

Annotating medical images for disease detection is often tedious and expensive. Moreover, the available training samples for a given task are generally scarce and imbalanced. These conditions are not conducive to learning effective deep neural models. Hence, it is common to 'transfer' neural networks trained on natural images to the medical image domain. However, this paradigm falls short in performance due to the large domain gap between natural and medical image data. To address this, we propose a novel concept of Pre-text Representation Transfer (PRT). In contrast to conventional transfer learning, which fine-tunes a source model after replacing its classification layers, PRT retains the original classification layers and updates the representation layers through an unsupervised pre-text task. The task is performed with (original, not synthetic) medical images, without utilizing any annotations. This enables representation transfer with a large amount of training data. This high-fidelity representation transfer allows us to use the resulting model as a more effective feature extractor. Moreover, we can also subsequently perform traditional transfer learning with this model. We devise a collaborative representation based classification layer for the case when we leverage the model as a feature extractor. We fuse the output of this layer with the predictions of a model induced with traditional transfer learning performed over our pre-text transferred model. The utility of our technique for limited and imbalanced data classification problems is demonstrated with an extensive five-fold evaluation for three large-scale models, tested for five different class-imbalance ratios for CT-based COVID-19 detection. Our results show a consistent gain over conventional transfer learning with the proposed method. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
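The core mechanism of PRT, applying an unsupervised pre-text gradient step to the representation layers while leaving the transferred classification layers untouched, can be sketched as a selective parameter update. The `rep.`/`cls.` naming convention and the plain gradient step are hypothetical simplifications, not the paper's implementation.

```python
def prt_finetune(params, pretext_grads, lr=0.01):
    """Selectively update model parameters for Pre-text Representation
    Transfer: representation layers take a gradient step computed from an
    unsupervised pre-text task; classification layers are kept as-is.

    params: dict mapping layer names ("rep.*" or "cls.*") to weights.
    pretext_grads: pre-text-task gradients for the "rep.*" entries.
    """
    updated = {}
    for name, w in params.items():
        if name.startswith("rep."):
            # Representation layers: updated without any annotations.
            updated[name] = w - lr * pretext_grads[name]
        else:
            # Classification layers: retained from the source model.
            updated[name] = w
    return updated
```

This contrasts with conventional transfer learning, where the classification layers would be replaced and re-trained with labeled target data.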

4.
Information Processing and Management ; 60(1), 2023.
Article in English | Scopus | ID: covidwho-2242256

ABSTRACT

Research on automated social media rumour verification, the task of identifying the veracity of questionable information circulating on social media, has yielded neural models achieving high performance, with accuracy scores that often exceed 90%. However, none of these studies focus on the real-world generalisability of the proposed approaches, that is, whether the models perform well on datasets other than those on which they were initially trained and tested. In this work we aim to fill this gap by assessing the generalisability of top-performing neural rumour verification models covering a range of different architectures from the perspectives of both topic and temporal robustness. For a more complete evaluation of generalisability, we collect and release COVID-RV, a novel dataset of Twitter conversations revolving around COVID-19 rumours. Unlike other existing COVID-19 datasets, COVID-RV contains conversations around rumours that follow the format of prominent rumour verification benchmarks, while differing from them in terms of topic and time scale, thus allowing a better assessment of the temporal robustness of the models. We evaluate model performance on COVID-RV and three popular rumour verification datasets to understand the limitations and advantages of different model architectures, training datasets, and evaluation scenarios. We find a dramatic drop in performance when testing models on a dataset different from the one used for training. Further, we evaluate the ability of models to generalise in a few-shot learning setup, as well as when word embeddings are updated with the vocabulary of a new, unseen rumour. Drawing upon our experiments, we discuss challenges and make recommendations for future research directions in addressing this important problem. © 2022 The Author(s)
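The cross-dataset evaluation described above (train on one dataset, test on every other) can be sketched as a generic harness. Here `train_fn` and `eval_fn` are placeholder callables standing in for any rumour verification model and metric; they are not the paper's models or evaluation code.

```python
def cross_dataset_eval(train_fn, eval_fn, datasets):
    """Generalisability harness: train on each dataset in turn and
    evaluate on all the others, returning a (train, test) -> score map.

    train_fn(data) -> model; eval_fn(model, data) -> score.
    """
    results = {}
    for train_name, train_data in datasets.items():
        model = train_fn(train_data)
        for test_name, test_data in datasets.items():
            if test_name != train_name:  # skip in-domain evaluation
                results[(train_name, test_name)] = eval_fn(model, test_data)
    return results
```

Comparing each off-diagonal score against the corresponding in-domain score makes the kind of performance drop reported in the abstract directly visible.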
